FEA Best Practices Vol 1

 

 

FINITE ELEMENT ANALYSIS

BEST PRACTICES

 

Volume 1: Building the Model

 

Joe McFadden

McFaddenCAE.com



 

When we poke something, it responds.

How it responds tells us everything about what it is.

That’s not just philosophy—it’s the entire foundation of simulation. Think about it. You tap a wine glass with a fork and it rings at a specific pitch. That pitch isn’t random. It comes from the glass’s shape, its material, its thickness, how much wine is in it. Everything that glass is—its geometry, its material properties, its boundary conditions—determines how it responds when you poke it.

Now here’s what’s remarkable. A finite element simulation does exactly the same thing. We build a mathematical representation of a physical system—its geometry, its materials, how it’s supported, how it’s connected—and then we poke it. We apply a load, an acceleration, a vibration environment. And the software tells us how the system responds.

But here’s the catch. The quality of that response depends entirely on how well we built the representation. If we get the geometry wrong, the materials wrong, the connections wrong—we’re not poking the real system. We’re poking something else. And the response we get back tells us about that something else, not the thing we actually care about.

That’s what this series is about. Not which buttons to click. Not which keywords to type. It’s about understanding what we’re actually doing when we simulate—and what it takes to do it honestly.

This is Volume 1, and we’re going to talk about the foundations. The things that have to be right before you run anything. If these are wrong, nothing downstream can fix it—not a fancy solver, not a fine mesh, not a powerful computer. We’re going to cover four topics: unit systems, material definitions, element types, and mesh convergence. Each one builds on the last, and together they form the DNA of every model you’ll ever build.

Let’s start with something that sounds almost too basic to talk about—units.

Unit Systems — The Invisible Foundation

I can tell you from experience—unit errors are one of the most common and most devastating mistakes in finite element analysis. And here’s what makes them dangerous: a unit error doesn’t give you an error message. It gives you results that look perfectly reasonable but are completely wrong.

Here’s why. Abaqus—and this applies to most major FEA codes—has no built-in unit system. It doesn’t know whether you’re working in millimeters or meters, tonnes or kilograms. It just does math on the numbers you give it. If those numbers aren’t internally consistent, your results will be garbage. And they’ll look perfectly normal. No warnings. No errors. Just wrong answers that you might put in a report and send to a customer.

So what does “consistent” mean?

In any unit system, F = ma must hold. Once you pick your length unit, your mass unit, and your time unit, everything else is locked in—force, stress, energy, density. You don’t get to choose them independently. If your length is millimeters and your mass is tonnes and your time is seconds, then force comes out in Newtons, stress in Megapascals, and density in tonnes per cubic millimeter. That’s not a choice. That’s a consequence.
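That lock-in can be verified with a few lines of arithmetic. Here is a minimal Python sketch (not from the original text) that expresses the base units of the millimeter-tonne-second system in SI and derives the consistent force, stress, and density units from F = ma:

```python
# Express each base unit of the mm-tonne-s system in SI, then derive
# the consistent units for force, stress, and density from F = m*a.
LENGTH = 1e-3   # 1 mm in meters
MASS = 1e3      # 1 tonne in kilograms
TIME = 1.0      # 1 second

force_unit = MASS * LENGTH / TIME**2     # kg*m/s^2 -> exactly 1 Newton
stress_unit = force_unit / LENGTH**2     # N/mm^2 -> 1e6 Pa, i.e. 1 MPa
density_unit = MASS / LENGTH**3          # tonne/mm^3 -> 1e12 kg/m^3

print(force_unit)             # 1.0 -> force comes out in Newtons
print(stress_unit)            # 1e6 -> one stress unit is one Megapascal
print(7850 / density_unit)    # 7.85e-9 -> steel density in this system
```

The same three-line derivation works for any system: pick length, mass, and time, and every derived unit follows.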

The four systems you’ll encounter most often:

System 1. SI Standard: meters, kilograms, seconds. Force in Newtons, stress in Pascals, density in kilograms per cubic meter. Steel density is 7,850. Young’s modulus is about 2 × 10¹¹. The numbers work, but they’re inconveniently large for stress and inconveniently small for typical part dimensions measured in millimeters.

System 2. Millimeters, tonnes, seconds—and this is the most popular for structural analysis. Force still in Newtons, stress in Megapascals, density in tonnes per cubic millimeter. Here’s where it gets tricky—steel density in this system is 7.85 × 10⁻⁹. That tiny number is incredibly easy to get wrong. Young’s modulus is 210,000 MPa. Natural frequencies come out directly in Hertz, which is convenient for dynamics work.

System 3. Millimeters, kilograms, milliseconds—common for explicit dynamics and crash. Force in kilonewtons, stress in Gigapascals, steel density is 7.85 × 10⁻⁶. The millisecond time base is convenient for short-duration impact events.

System 4. Inches, pounds-force, seconds—the US customary system, sometimes called IPS. This is extremely common in American aerospace, defense, and legacy industries. Force in pounds-force, stress in PSI (pounds per square inch)—and this is where it gets genuinely confusing.

The mass unit is not pounds-mass. Remember—F = ma must hold. If force is in pounds-force, length is in inches, and time is in seconds, then mass must be in lbf·s²/in. That unit doesn’t have a widely used name. Some people call it a “slinch”—a mashup of slug and inch. Others have called it a “blob,” or even a “snail”—not to be confused with the French delicacy or the garden pest. The more familiar term is the slug, which is the foot-based version (lbf·s²/ft)—coined around 1900 by British physicist Arthur Mason Worthington. Despite what you might think, it has nothing to do with the creature. It comes from the old meaning of slug as a solid block of metal—like a bullet slug. Most engineers just call it the IPS mass unit and move on. The critical point is that it is not pounds-mass.

Steel density in this system is about 7.35 × 10⁻⁴, in units of lbf·s²/in⁴. That number looks nothing like the 490 lb/ft³ or 0.284 lbm/in³ you’ll find in a materials handbook. You have to divide the handbook density by gravitational acceleration—386.4 in/s²—to get the correct FEA input. This single conversion is probably the number one source of imperial unit errors in finite element models.

Young’s modulus of steel is about 29 × 10⁶ PSI. Natural frequencies still come out in Hertz. Gravity is 386.4 in/s²—not 32.2. That 32.2 is feet per second squared. If your model is in inches and you type 32.2, your gravity is off by a factor of twelve.
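The handbook-to-FEA density conversion described above is a one-liner, but it is worth encoding once and reusing. A sketch (function name is mine, the numbers are from the text):

```python
G_IN_S2 = 386.4  # standard gravity in in/s^2 -- the inch value, not 32.2 ft/s^2

def handbook_to_fea_density(lbm_per_in3: float) -> float:
    """Convert a handbook weight density in lbm/in^3 to the
    lbf*s^2/in^4 mass density an inch-pound-second model needs."""
    return lbm_per_in3 / G_IN_S2

steel = handbook_to_fea_density(0.284)
print(f"{steel:.2e}")   # 7.35e-04 lbf*s^2/in^4
```

If the density you typed into an IPS model looks like a handbook number rather than a number near 7e-4, you have found the error before the solver did.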

Identifying the Unit System

Now, here’s the practical question: when you open someone else’s model—and in the real world, you’re constantly receiving vendor models, legacy models, models from other teams—how do you know which system they used?

Look at the material properties. They’re your best clue. If density of steel is around 7,850, you’re in SI meters. If it’s around 7.85 × 10⁻⁹, you’re in millimeters-tonnes-seconds. If it’s around 7.35 × 10⁻⁴, you’re in inches-pounds-force-seconds. If Young’s modulus is around 200 billion, you’re in SI. If it’s around 210,000, you’re in millimeters with stress in Megapascals. If it’s around 29 million, you’re in inches with stress in PSI. The material properties are the fingerprint of the unit system.
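That fingerprint check can be automated. A sketch, using the steel density values from the text and a loose tolerance I chose to cover alloy-to-alloy variation:

```python
import math

# Steel density expressed in each candidate unit system
STEEL_DENSITY = {
    "SI (m, kg, s)": 7850.0,
    "mm, tonne, s": 7.85e-9,
    "mm, kg, ms": 7.85e-6,
    "in, lbf, s (IPS)": 7.35e-4,
}

def guess_unit_system(density: float) -> str:
    """Match a model's steel density against the known fingerprints.
    The 20% relative tolerance is an assumption, wide enough for
    different steel alloys but narrow enough to separate the systems."""
    for system, ref in STEEL_DENSITY.items():
        if math.isclose(density, ref, rel_tol=0.20):
            return system
    return "unknown -- stop and investigate before trusting any result"

print(guess_unit_system(7.9e-9))   # mm, tonne, s
```

The "unknown" branch is the important one: a density that matches no fingerprint usually means mixed units, and nothing downstream is trustworthy.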

The Danger of Mixing Units

The most dangerous scenario—and I’ve seen this too many times—is mixing units. You build geometry in millimeters because that’s what the CAD model uses. But you enter material properties from a handbook that’s in SI meters. Now your Young’s modulus is off by a factor of a million. Your stresses are off by a million. And nothing in the software flags this. Nothing.

Some specific traps:

Density in the millimeter-tonne-second system is a very small number. People see “7850” in a reference table and type 7850 into their model. But in millimeter-tonne-second, it should be 7.85 × 10⁻⁹. That makes the part a trillion times too heavy. Your natural frequencies will be absurdly low. Your dynamic response will be completely fictional.

The imperial version of this trap is just as bad. A materials handbook says steel density is 0.284 lbm/in³. Someone types 0.284 into the model. But in the inches-pounds-force-seconds system, density must be in lbf·s²/in⁴—which means dividing by 386.4. The correct value is 7.35 × 10⁻⁴. Entering 0.284 makes the part about 386 times too heavy. Everything dynamic is wrong by a factor of about 20—the square root of 386.

Gravity is another one. If your model is in millimeters, gravity is 9,810 mm/s², not 9.81. Get that wrong and every gravity-loaded analysis is off by a factor of a thousand. In imperial, gravity is 386.4 in/s². Not 32.2—that’s feet. If you’re in inches and you type 32.2, everything is off by a factor of twelve.

And boundary condition magnitudes. A 1,000-Newton force in millimeter-tonne-second is 1,000 Newtons because force happens to be in Newtons. But in millimeter-kilogram-millisecond, force is in kilonewtons, so typing 1,000 means a million Newtons—a thousand times too much.

My recommendation: pick one unit system for your entire team and stick with it. Document it in every model. For most structural and dynamic work, millimeters-tonnes-seconds is the best choice. Create a reference card with material properties already converted. And always—always—verify the density value first when you inherit someone else’s model. It immediately tells you their unit system, and it immediately tells you if something is wrong.

Material Definition — The Nature of the System

Remember what I said at the beginning—when we poke a system, its response reveals its nature. Well, the material properties are a huge part of that nature. The stiffness matrix comes from the elastic properties. The mass matrix comes from the density. If either one is wrong, you’re not simulating your system. You’re simulating a fictional system that happens to have the same shape.

Materials are the foundation of any simulation. If your material definitions are wrong, nothing else matters—your mesh, your boundary conditions, your solver settings—none of it can compensate for incorrect material data.

Let me walk through what every material needs.

At a minimum: elastic properties and density. Young’s modulus and Poisson’s ratio define how the material resists deformation. Density defines how it resists acceleration. For steel, Young’s modulus is about 200,000 to 210,000 MPa in the millimeter-tonne-second system, or about 29 × 10⁶ PSI in imperial. Poisson’s ratio is typically 0.28 to 0.32 for metals, 0.35 to 0.45 for polymers.

For rubber and other nearly incompressible materials, Poisson’s ratio approaches 0.5—but never use exactly 0.5. That causes numerical problems because it implies zero volume change, which creates a mathematical singularity. Use 0.495 or 0.499 instead.

Density deserves special emphasis. Without density, your material has zero mass. That means no inertia in dynamic analyses. Gravity loads produce nothing. Natural frequency calculations are impossible—remember, the eigenvalue problem involves both the stiffness matrix and the mass matrix. No mass, no eigenvalues, no frequencies.

My advice: always define density, even for static analyses. It costs nothing and enables mass reporting, which is a valuable sanity check on your model. If your model’s total mass doesn’t match the physical part, something is wrong—either the geometry, the density, or both.
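The mass sanity check is simple enough to script. A sketch with hypothetical numbers (the bracket volume and the measured mass are invented for illustration; the system is millimeter-tonne-second):

```python
def model_mass(density: float, volume: float) -> float:
    """Total mass = density * volume, in whatever consistent system you use."""
    return density * volume

# Hypothetical example: a 2,000,000 mm^3 steel bracket in mm-tonne-s units.
mass_tonnes = model_mass(7.85e-9, 2.0e6)   # 0.0157 tonnes = 15.7 kg
measured_kg = 15.5                         # what the scale says (assumed)

error = abs(mass_tonnes * 1000 - measured_kg) / measured_kg
print(f"mass error: {error:.1%}")          # a few percent is fine
```

A few percent of disagreement is geometry simplification; a factor of 386 or a factor of a trillion is a unit error, caught in seconds.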

Beyond the Basics

Beyond the basics, what you need depends on what you’re asking the simulation to do.

Plasticity—the stress-versus-plastic-strain curve—is required for any analysis involving permanent deformation. Crash, metal forming, drop tests beyond the elastic limit. But here’s a critical detail: the data must be true stress versus true plastic strain. Not engineering stress-strain. This is a very common error, and it matters because at large strains, the difference between engineering and true values is significant.
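The engineering-to-true conversion follows from two standard formulas, valid up to the onset of necking while deformation is still uniform. A sketch (the test point is a hypothetical mild steel value, not data from the text):

```python
import math

def eng_to_true(eng_stress: float, eng_strain: float, E: float):
    """Convert one engineering stress-strain point to true stress and
    true *plastic* strain, the form a plasticity table expects.
    Valid only up to necking, where deformation is still uniform."""
    true_stress = eng_stress * (1.0 + eng_strain)
    true_strain = math.log(1.0 + eng_strain)
    plastic_strain = true_strain - true_stress / E  # subtract the elastic part
    return true_stress, plastic_strain

# Hypothetical point for a mild steel, E = 210,000 MPa (mm-tonne-s units):
s, ep = eng_to_true(400.0, 0.10, 210_000.0)
print(round(s, 1), round(ep, 4))   # 440.0 0.0932
```

Note the two traps from this section in one place: the conversion itself, and the fact that the plastic strain comes out as 0.0932, a fraction, not 9.32.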

And here’s something fundamental that connects back to the different analysis types we’ll cover in Volume 2. Plasticity is only active in general analysis steps. In perturbation steps—modal analysis, harmonic response, random vibration—the solver uses only the elastic stiffness. It completely ignores your plasticity data. Your rubber material’s full nonlinear curve, your metal’s complete hardening behavior—none of it matters during a perturbation procedure. The solver sees Young’s modulus and Poisson’s ratio, period.

This is not a bug. It’s a mathematical requirement. Perturbation procedures solve a linearized problem, and nonlinear material behavior has no place in a linear formulation. We’ll explore this deeply in Volume 2 when we talk about modal analysis and the perturbation family.

For crash and impact, you may need damage and failure criteria. Without them, your material deforms forever without breaking, which is unrealistic for severe loading. Ductile damage, Johnson-Cook damage, and shear damage are common options.

For high-speed events, material properties change with strain rate—metals get stronger when deformed quickly. The Johnson-Cook model handles this, combining strain hardening, strain rate sensitivity, and thermal softening. If you’re simulating a drop test or a crash and your material doesn’t include rate dependence, you may be underestimating the material’s resistance to deformation.

For rubber, silicone, and foams—materials that undergo large deformation—use hyperelastic models instead of simple elasticity. Mooney-Rivlin with two constants is most common and works to about 100% strain. For larger strains, Ogden or Arruda-Boyce provide more flexibility.

Common Material Mistakes

Common material mistakes, and I’ve seen every one of these in production models: engineering stress-strain entered where true stress-strain is required; missing density—especially in models converted from static to dynamic analysis; Poisson’s ratio of exactly 0.5 causing numerical blow-up; plastic strain entered as a percentage—Abaqus wants 0.05, not 5; and forgetting rate dependence in impact analyses.

Think about it this way. The material properties define the nature of your system. Get the stiffness wrong, and every frequency, every stress, every displacement is wrong. Get the density wrong, and every dynamic result is wrong. Get the plasticity wrong, and every prediction of permanent deformation is wrong. The error doesn’t average out. It propagates through everything.

Element Types — Discretizing Reality

Choosing the right element is one of the most impactful decisions you’ll make in a finite element model. The wrong element type can give you inaccurate results, slow performance, or both. And this is where the art of simulation starts to show—because there’s no single right answer. The best element depends on your geometry, your analysis type, and what quantities you need from the results.

Abaqus has over a hundred element types, but in practice, about ten of them cover 90% of real-world work. Understanding those core elements and when to choose each one is what matters.

Elements fall into families. Solid elements—also called continuum elements—model three-dimensional volumes. These are the most common and the most intuitive. Shell elements model thin-walled structures like sheet metal, aircraft skins, and vehicle body panels. Beam elements model slender structural members—frames, trusses, scaffolding. And there are specialty elements for membranes, springs, connectors, and more.

Solid Elements

Let’s focus on solid elements first, since they’re what you’ll encounter most. The two workhorses are C3D8R and C3D10M.

C3D8R—the 8-node linear hexahedral with reduced integration. The R means reduced integration: one integration point instead of eight. This makes it fast and resistant to a numerical problem called locking, where elements become artificially stiff in bending or when the material is nearly incompressible. The trade-off is that reduced integration elements can suffer from hourglassing—a zero-energy deformation mode that produces wavy, non-physical stress patterns. Abaqus applies hourglass control by default, and we’ll talk about hourglassing in detail in Volume 4. C3D8R is the preferred element for explicit dynamics—crash, impact, drop tests.

C3D10M—the 10-node modified quadratic tetrahedral. The M means modified formulation, specifically designed to work well in contact problems and with nearly incompressible materials. Tets are easier to mesh than hexes—any geometry, no matter how complex, can be tet-meshed automatically. C3D10M is your go-to when you can’t get a clean hex mesh, which honestly is most real-world geometries. It works well in both implicit and explicit analysis.

Now—avoid C3D4, the 4-node linear tet, whenever possible. It’s far too stiff in bending and gives poor stress results unless you use an extremely fine mesh. If you open a vendor model and it’s full of C3D4 elements, that’s a red flag. The results from that model need to be treated with suspicion until verified.

C3D8—the full integration hex without the R—has the opposite problem from C3D8R. It’s overly stiff in bending due to shear locking. If you need full integration hexes, use C3D8I—the incompatible modes version. The incompatible modes add internal degrees of freedom that eliminate shear locking.

C3D20R—the 20-node reduced integration quadratic hex—gives excellent accuracy but at significant cost. Use it when you need high-quality stress results and can afford the computation time. Not recommended for contact problems.

Shell Elements

For shell elements, S4R—the 4-node reduced integration shell—is the default and works well for most thin-walled structures. S3 is the triangular shell for meshing curved surfaces. Use S4R wherever possible and fill in with S3 triangles where geometry demands it.

The shell-versus-solid decision comes down to geometry. If the thickness is less than about one-tenth of the other dimensions, shell elements are appropriate and much more efficient. If the thickness is comparable to the other dimensions, or you need the through-thickness stress distribution, use solid elements.

One specialty element worth knowing: SC8R, the continuum shell. It looks like a solid element—it has volume, it has nodes on top and bottom faces—but it behaves like a shell. This is excellent for layered composites where you model the stacking sequence explicitly.

Rules of Thumb

A few rules of thumb to carry with you. Quadratic elements give better stress accuracy per element than linear, but cost more. Reduced integration elements are faster and resist locking, but need hourglass control. Hexes are better than tets of the same order, but tets are far easier to mesh. And for contact problems, stick with C3D8R or C3D10M—higher-order elements can have contact stability issues.

Think of element selection as part of the same philosophy we started with. Every element type makes assumptions about how the material deforms within its boundaries. A linear element assumes simple deformation patterns. A quadratic element captures more complexity. A shell element assumes the structure is thin. Each assumption is either compatible with your physical reality or it isn’t. Choosing the right element means choosing assumptions that match the physics—not choosing what’s easiest to set up.

Mesh Convergence — The Proof That Your Results Are Real

This might sound academic, but mesh convergence is one of the most practically important quality assurance steps in finite element analysis. Without a convergence study, you simply don’t know whether your results are accurate. You’re trusting the first answer the computer gives you, and that’s not engineering—that’s hope.

Here’s the fundamental problem. In FEA, we approximate a continuous structure with discrete elements. More elements means a better approximation—but also more expensive computation. The question is: how many elements are enough? A mesh convergence study answers that systematically.

The concept is straightforward. Run your analysis with a coarse mesh and note the result you care about—peak stress, maximum displacement, a natural frequency. Then refine the mesh and run again. Compare. If results changed significantly, refine further. Keep going until results stabilize. When doubling mesh density changes your result by less than 2–5%, you’ve converged.

The Practical Approach

First—and this is critical—identify the specific quantity you’re converging on. You don’t converge “the model.” You converge a specific result that drives your engineering decision. Peak stress at a fillet. Maximum displacement at a mounting point. First natural frequency. Choose what matters.

Start with the coarsest reasonable mesh. Run it. Record the target quantity, the element count, and the solve time. Then refine—and here’s where judgment comes in. Global refinement doubles elements everywhere, which is simple but expensive. Local refinement increases density only where you need it—near stress concentrations, contact zones, load application points, fillets, holes. Local refinement is almost always the right approach.

A typical study might look like this. Mesh 1: 5,000 elements, peak stress 150 MPa. Mesh 2: 15,000 elements, peak stress 185 MPa—a 19% change. Not converged. Mesh 3: 40,000 elements, peak stress 195 MPa—a 5% change, getting close. Mesh 4: 100,000 elements, peak stress 198 MPa—a 1.5% change. Now you’re converged. Report the result from Mesh 4 with confidence.
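The bookkeeping for a study like that is worth scripting so the percent changes are computed the same way every time. A sketch, using the numbers above and computing each change relative to the finer mesh:

```python
def percent_changes(results):
    """Percent change between successive meshes, relative to the finer mesh."""
    return [abs(b - a) / b * 100.0 for a, b in zip(results, results[1:])]

peak_stress = [150.0, 185.0, 195.0, 198.0]   # MPa, from the study above
changes = percent_changes(peak_stress)
print([round(c, 1) for c in changes])        # [18.9, 5.1, 1.5]

TOLERANCE = 2.0  # percent -- the tight end of the 2-5% criterion
converged = changes[-1] < TOLERANCE
print(converged)                             # True
```

Keeping the element counts and solve times in the same table turns the study into the convergence plot described later in this section.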

Subtleties That Trip People Up

Stress converges slower than displacement. Displacements are integrated quantities—errors tend to average out across the model. Stresses are local quantities, especially at concentrations, and they depend heavily on mesh density right at the location of interest. A mesh that gives beautifully converged displacements might still give unconverged stresses. If stress is what drives your design decision, you need a finer mesh than if displacement does.

Stress singularities are locations where the theoretical stress is infinite—sharp re-entrant corners, point loads, crack tips. At these locations, stress never converges. As you add elements, the peak stress just keeps climbing. This isn’t a mesh quality problem—it’s the physics of the mathematical idealization. The real structure has a small fillet, not an infinitely sharp corner. Either model the fillet, or use stress averaged over a small region, or accept that the number at that exact point isn’t meaningful. Don’t chase a converged value that doesn’t exist.

For dynamic analyses, natural frequencies typically converge faster than stresses. But higher-order mode shapes can be sensitive to mesh density. And for transient dynamics like drop tests, the overall time history shape converges quickly, but peak values—the numbers you actually report—need more refinement.

For perturbation analyses specifically—modal, harmonic, random vibration—convergence behavior follows the same principles, but because these procedures are computationally cheaper per run, you can afford more iterations. Make sure the modes themselves are converged before feeding them into a downstream analysis like random vibration. If your modal frequencies aren’t stable, the random vibration response built on those frequencies isn’t stable either.

Element quality matters too. Convergence assumes reasonably shaped elements. If your elements are highly distorted—large aspect ratios, skewed angles, warped faces—they converge slowly or not at all. Always check element quality metrics after meshing. A fine mesh of terrible elements can be worse than a coarser mesh of good ones.

And one practical note for explicit dynamics: remember that solve time grows much faster than the element count. If you double the number of elements in each direction, you get eight times the elements—but the stable time increment halves, so solve time increases by a factor of roughly sixteen. Mesh refinement in explicit is expensive. Use local refinement aggressively and refine globally only when you must.
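That scaling argument reduces to a one-line estimate. A rough sketch (an idealization: it ignores solver overhead and assumes the critical element shrinks with the rest of the mesh):

```python
def explicit_cost_factor(linear_refinement: float) -> float:
    """Rough solve-time multiplier for uniformly refining an explicit mesh.
    Element count grows with the cube of linear refinement, and the stable
    time increment shrinks linearly with element size, so cost ~ r**4."""
    n_elements = linear_refinement ** 3      # 2x finer in each direction -> 8x
    n_increments = linear_refinement         # dt halves when element size halves
    return n_elements * n_increments

print(explicit_cost_factor(2.0))   # 16.0 -- halve the element size, pay 16x
```

Run it for a refinement factor of 3 and the multiplier is 81, which is why global refinement in explicit is usually the last resort.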

The most powerful way to present a convergence study: plot your target quantity versus element count or element size. The curve should flatten as it approaches the converged value. That visual gives immediate, undeniable confidence—or it shows you that you haven’t converged yet and need to keep refining.

Bringing It Together

Let me step back and connect what we’ve covered.

We started with a philosophy: when we poke a system, its response reveals its nature. A simulation is a mathematical poke, and the quality of what we learn depends entirely on how faithfully we’ve represented the system.

Unit consistency is the invisible foundation. Get it wrong and every number in every output is fiction—with no warning.

Material properties define the system’s nature—its stiffness, its mass, its capacity for permanent deformation. Without correct materials, you’re simulating something that doesn’t exist.

Element types determine how you discretize continuous reality into solvable pieces. Each type makes assumptions about the physics, and those assumptions must match your problem.

And mesh convergence is the proof—the systematic demonstration that your discretization is fine enough to give you an answer you can trust.

These four topics aren’t separate checklists. They’re layers of the same question: have I built a model that faithfully represents the system I’m asking about?

Get the units wrong, and the material properties are wrong. Get the material wrong, and the stiffness and mass matrices are wrong. Choose the wrong element, and the numerical approximation doesn’t match the physics. Use too coarse a mesh, and the approximation isn’t refined enough to resolve the answer. Each layer depends on the one before it.

In Volume 2, we’ll pick up from here and explore what happens when we start poking these models—specifically, the family of linear perturbation analyses that reveal a structure’s natural dynamic character. Modal analysis, harmonic response, random vibration, and shock response spectrum—all connected, all powerful, and all governed by a set of constraints that you must respect.

But that conversation starts from here—from a model with consistent units, correct materials, appropriate elements, and a converged mesh. Without these foundations, the dynamic results we compute in Volume 2 don’t describe reality. They describe a mistake. 
